Futurism
Scientists Warn That AI Threatens Science Itself
Scientists should abstain from using text-generating large language models (LLMs) in scientific research because of the risk of misinformation.
LLMs prioritize being helpful and convincing over accuracy and alignment with fact.
Anthropomorphizing LLMs and trusting them as truth-tellers pose a unique danger to the future of science.